Compiler Optimizations for Massively Parallel Machines: Transformations on Iterative
Authors
Abstract
This paper presents a set of compiler optimizations and their application strategies for a common class of data parallel loop nests. The arrays updated in the body of the loop nests are assumed to be partitioned into blocks (rectangular, rows, or columns), where each block is assigned to a processor. These optimizations are demonstrated in the context of a FORTRAN-90 compiler with very encouraging preliminary results. In the case of solving tridiagonal systems by Gaussian Elimination, the performance of the optimized native code is two orders of magnitude better than that of the CM-FORTRAN compiler and approaches that of the hand-written Connection Machine Scientific Software Library (CMSSL) routine.
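For concreteness, the C sketch below illustrates the block data layout the abstract assumes: a global array is cut into rectangular blocks on a processor grid, and each processor's loop nest is shrunk to the bounds of its own block. The array extent N, the 4 x 4 processor grid, and the grid coordinates pr and pc are illustrative assumptions, not values taken from the paper.

    /* Minimal sketch (assumed names and sizes, not from the paper):
       a global N x N array is partitioned into rectangular blocks on
       a P x Q processor grid; the processor at grid position (pr, pc)
       updates only the elements of its own block, so the global loop
       nest is rewritten with local bounds. */
    #include <stdio.h>

    #define N 1024   /* global array extent (assumed) */
    #define P 4      /* processor rows in the grid (assumed) */
    #define Q 4      /* processor columns in the grid (assumed) */

    int main(void) {
        int pr = 1, pc = 2;          /* this processor's grid coordinates (assumed) */
        int bi = (N + P - 1) / P;    /* block height */
        int bj = (N + Q - 1) / Q;    /* block width  */
        int i0 = pr * bi, i1 = (pr + 1) * bi < N ? (pr + 1) * bi : N;
        int j0 = pc * bj, j1 = (pc + 1) * bj < N ? (pc + 1) * bj : N;

        static double a[N][N];       /* stands in for this processor's view of the array */

        /* the data-parallel update touches only the locally owned block */
        for (int i = i0; i < i1; i++)
            for (int j = j0; j < j1; j++)
                a[i][j] = a[i][j] + 1.0;

        printf("processor (%d,%d) owns rows [%d,%d) and columns [%d,%d)\n",
               pr, pc, i0, i1, j0, j1);
        return 0;
    }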
Similar resources
A FORTRAN Compiling Method for Dataflow Machines and Its Prototype Compiler for the Parallel Processing System -Harray-
A Hierarchical Parallelizing Compiler for VLIW/MIMD Machines (p. 49); Dynamic Dependence Analysis: A Novel Method for Data Dependence Evaluation (p. 64); On the Feasibility of Dynamic Partitioning of Pointer Structures (p. 82); Compiler Analysis for Irregular Problems in Fortran D (p. 97); Data Ensembles in Orca C (p. 112); Compositional C++: Compositional Parallel Programming (p. 124); Data Parallelism and Lind...
Using Iterative Compilation to Reduce Energy Consumption
The rapid rate of architectural change in processors puts compiler technology under enormous stress. This is emphasized by new demands placed on compilers, such as reducing static code size, energy consumption, or power dissipation. Iterative compilation has been proposed as an approach to find the best sequence of optimizations (such as loop transformations) for an application, in order to imp...
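As a rough illustration of the idea in this abstract, the C driver below rebuilds one benchmark under several candidate optimization sequences, times each run, and keeps the fastest; the compiler command cc, the file bench.c, and the flag sets are hypothetical placeholders, not taken from the cited work.

    /* Hypothetical iterative-compilation driver (POSIX): every name
       below (cc, bench.c, the flag sequences) is an assumption used
       only to illustrate the search loop. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const char *flags[] = { "-O1", "-O2", "-O3", "-O3 -funroll-loops" };
        int n = (int)(sizeof flags / sizeof flags[0]);
        double best = 1e30;
        const char *best_flags = NULL;
        char cmd[256];

        for (int i = 0; i < n; i++) {
            /* rebuild the benchmark with this optimization sequence */
            snprintf(cmd, sizeof cmd, "cc %s -o bench bench.c", flags[i]);
            if (system(cmd) != 0) continue;

            /* time one run of the resulting executable (wall clock) */
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            if (system("./bench") != 0) continue;
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double secs = (double)(t1.tv_sec - t0.tv_sec)
                        + (double)(t1.tv_nsec - t0.tv_nsec) * 1e-9;

            printf("%-20s %.3f s\n", flags[i], secs);
            if (secs < best) { best = secs; best_flags = flags[i]; }
        }
        if (best_flags)
            printf("best sequence found: %s\n", best_flags);
        return 0;
    }

A real search along the lines the abstract describes would score each candidate by energy consumption or code size as well, rather than by run time alone.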
Superoptimization in LLVM
Superoptimization is a known technique to integrate the analyses and transformations of a number of separate optimizations in order to obtain an optimization that is more expressive than the sequential and iterative application of the original optimizations. This paper describes the elaboration of this technique within the Low Level Virtual Machine (LLVM) Compiler Infrastructure. A framework su...
Towards Identifying and Monitoring Optimization Impacts
Optimizing compilers apply code-improving transformations in phases over a source program in an effort to emit the fastest or most compact executable code possible. The effectiveness of these optimizations is limited by phase-ordering problems, manifested as interactions among optimizations and the attending impact on system resource utilization. Incorporating into each optimization module an awa...
A Structured Approach to Proving Compiler Optimizations Based on Dataflow Analysis
This paper reports on the correctness proof of compiler optimizations based on data-flow analysis. We formulate the optimizations and analyses as instances of a general framework for data-flow analyses and transformations, and prove that the optimizations preserve the behavior of the compiled programs. This development is a part of a larger effort of certifying an optimizing compiler by proving...
Journal:
Volume / Issue:
Pages: -
Year of publication: 1993